4 research outputs found

    On Neural Associative Memory Structures: Storage and Retrieval of Sequences in a Chain of Tournaments

    Get PDF
    Associative memories enjoy many interesting properties in terms of error-correction capabilities, robustness to noise, storage capacity, and retrieval performance, and their use spans a large set of applications. In this letter, we investigate and extend tournament-based neural networks, a sequence-storage associative memory architecture with high memory efficiency and accurate sequence retrieval, originally proposed by Jiang, Gripon, Berrou, and Rabbat (2016). We propose a more general method for learning the sequences, which we call feedback tournament-based neural networks. The retrieval process is also extended to work in both directions, forward and backward; in other words, any sufficiently long segment of a sequence can reproduce the whole sequence. Furthermore, two retrieval algorithms, cache-winner and explore-winner, are introduced to improve retrieval performance. Through simulation results, we shed light on the strengths and weaknesses of each algorithm.
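
    The bidirectional retrieval property described above can be illustrated with a toy sketch that is deliberately much simpler than the tournament-based architecture of the letter: stored sequences are indexed by fixed-length windows, and a probe segment is extended forward and backward until no stored transition applies. All names below are hypothetical, and the dictionary lookup merely stands in for the network's winner-take-all retrieval.

        # Toy stand-in for bidirectional sequence retrieval (NOT the tournament-based
        # network itself): every length-k window of a stored sequence is indexed by
        # its successor and predecessor, and a probe segment of length >= k is grown
        # in both directions until no stored transition matches.
        # Assumes windows are unique; cyclic sequences would need a stopping rule.

        def store(sequences, k):
            forward, backward = {}, {}
            for seq in sequences:
                for i in range(len(seq) - k):
                    forward[tuple(seq[i:i + k])] = seq[i + k]       # window -> next symbol
                for i in range(1, len(seq) - k + 1):
                    backward[tuple(seq[i:i + k])] = seq[i - 1]      # window -> previous symbol
            return forward, backward

        def retrieve(segment, forward, backward, k):
            out = list(segment)
            while tuple(out[-k:]) in forward:                       # extend forward
                out.append(forward[tuple(out[-k:])])
            while tuple(out[:k]) in backward:                       # extend backward
                out.insert(0, backward[tuple(out[:k])])
            return out

        fwd, bwd = store([["a", "b", "c", "d", "e", "f"]], k=2)
        print(retrieve(["c", "d"], fwd, bwd, k=2))   # -> ['a', 'b', 'c', 'd', 'e', 'f']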

    Learning and cognition in brain and machine: Prediction of dementia from longitudinal data and modelling memory networks

    No full text
    Starting in the mid-20th century and throughout their development, modern neuroscience and artificial intelligence (AI) have provided each other with inspiration, insights, and tools. The degree to which they are intertwined has been in constant flux over the years, but the connection has always been present. With the enormous resurgence of interest in machine learning over the past decade, led by the much-celebrated successes of artificial neural networks and deep learning, the bond between the two fields seems to be growing stronger. Artificial intelligence and machine learning have always kept an eye on biological intelligence and learning, as these provide our only examples of general intelligence and strong learning capabilities, inspiring the development of their much less capable, albeit improving, computational counterparts. The growing attention to both neuroscience and AI is also leading to growth where they intersect, i.e. in neuroscience-inspired AI and AI-inspired neuroscience, and in the use of computational AI models within neuroscience and the cognitive sciences. In this context, the present thesis aims to make a modest contribution through our application of machine learning techniques to the study of dementia, using data from longitudinal MRI and psychometric testing, and through our proposed models for simulating aspects of the formation of memory networks during learning and memory retrieval. The former is our main contribution and is addressed in studies A, B, and C, while the latter is reflected in studies D and E. Through longitudinal studies, i.e. studies based on repeated measurements collected from the same subjects, or experimental units, over time, one can observe how measurements develop and discover new relationships between variables. Longitudinal data analysis is a large field of research comprising a multitude of methods, and it is widely applicable, e.g. in behavioural analysis and medicine. One inherently longitudinal phenomenon of particular interest for the present work is the biological, neurological, and cognitive alteration linked to aging. There is an immense need for methods that can indicate the risk of developing aging-related diseases such as dementia, and for the improved understanding that new computational models of cognitive skills such as memory and learning can provide. The first part of this thesis (studies A, B, and C) develops and evaluates methods for using machine learning models with longitudinal data that have a time-dependent structure. We propose two novel and flexible frameworks to describe the trajectories of change extracted from the longitudinal data. The two frameworks are based on, respectively, (i) a combination of mixed-effects models used to extract features from the longitudinal trajectories that can then be used to train any type of machine learning classifier, and (ii) mapping the multi-dimensional data onto two-dimensional images, enabling classification with convolutional neural networks. The second part of this thesis (studies D and E) aims to construct simple and flexible models that can be used to simulate learning and memory retrieval processes in the human brain. The proposed memory models are (i) a new associative memory for storing sequences, together with an investigation of how to make retrieval efficient, and (ii) a combination of a reinforcement learning model that forms memory connections in the training phase and an iterative diffusion process that updates the memory network used in the test phase. We found that the frameworks proposed in the first part of the thesis, although relatively simple approaches to the complexities of longitudinal data analysis, are comparable to other approaches in the literature in how accurately they predict dementia. The model for learning and retrieval based on associative memory proposed in Paper D has several features that make it resemble its biological counterpart more closely than comparable models in the literature do, while significantly reducing errors in sequence retrieval. The model for episodic memory developed in Paper E is quite flexible and can simulate actual experiments on typical and atypical human behaviours.
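
    As a hedged illustration of framework (i) above, the sketch below turns each (synthetic) subject's longitudinal measurements into trajectory features and trains an off-the-shelf classifier on them. For brevity, a per-subject least-squares intercept and slope stand in for the mixed-effects models the thesis actually uses; all data, names, and effect sizes are invented for the example.

        # Sketch: trajectory features (intercept + slope per subject) feed a classifier.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import f1_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n_subjects, n_visits = 200, 5
        times = np.arange(n_visits)                              # visit times, e.g. years
        labels = rng.integers(0, 2, n_subjects)                  # 0 = stable, 1 = converter
        # converters are given a steeper decline, plus noise (purely synthetic)
        slopes = -0.2 - 0.5 * labels + 0.1 * rng.standard_normal(n_subjects)
        scores = 30 + slopes[:, None] * times + rng.standard_normal((n_subjects, n_visits))

        def trajectory_features(y, t):
            # per-subject intercept and slope of a cognitive score over time
            slope, intercept = np.polyfit(t, y, deg=1)
            return [intercept, slope]

        X = np.array([trajectory_features(scores[i], times) for i in range(n_subjects)])
        X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
        clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
        print("held-out F1:", round(f1_score(y_te, clf.predict(X_te)), 2))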

    Enhanced Equivalence Projective Simulation: A Framework for Modeling Formation of Stimulus Equivalence Classes

    No full text
    Formation of stimulus equivalence classes has recently been modeled through equivalence projective simulation (EPS), a modified version of a projective simulation (PS) learning agent. PS is endowed with an episodic memory that resembles the internal representation in the brain and the concept of cognitive maps. The flexibility and interpretability of PS enable the EPS model, and consequently the model we explore in this letter, to simulate a broad range of behaviors in matching-to-sample experiments. The episodic memory, the basis for the agent's decision making, is formed during the training phase. Derived relations, which are not trained directly but can be established via the network's connections, are computed on demand in the EPS model during test-phase trials by likelihood reasoning. In this letter, we investigate the formation of derived relations in the EPS model using network enhancement (NE), an iterative diffusion process that yields an offline approach to the agent's decision making in the testing phase. The NE process is applied after the training phase to denoise the memory network, so that derived relations are formed in the memory network and retrieved during the testing phase. During the NE phase, indirect relations are enhanced and the structure of the episodic memory changes. This approach can also be interpreted as the agent replaying its experience after the training phase, which is in line with recent findings in behavioral and neuroscience studies. In comparison with EPS, our model captures the formation of derived relations and other features, such as the nodal effect, in a more intrinsic manner. Decision making in the test phase is not an ad hoc computational method but rather a retrieval and update process over the cached relations in the memory network, driven by the test trial. To study the role of the parameters in agent performance, the proposed model is simulated, and the results are discussed, across various experimental settings.
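
    A hedged sketch of the diffusion idea follows: repeatedly blending a row-normalised memory network with its own two-step relations gives weight to indirect (derived) connections such as A-C when only A-B and B-C were trained. This is a simplified update written for illustration, not the exact network-enhancement operator used in the letter, and all values are toy numbers.

        # Simplified diffusion over a memory network; strengthens indirect relations.
        import numpy as np

        def diffuse(W, alpha=0.5, iters=20):
            # iteratively mix the network with its own two-step (random-walk) relations
            W = W / W.sum(axis=1, keepdims=True)          # row-normalise to transition probabilities
            P = W.copy()
            for _ in range(iters):
                P = alpha * (P @ P) + (1 - alpha) * W     # blend two-step paths with trained relations
                P = P / P.sum(axis=1, keepdims=True)
            return P

        # toy memory network over stimuli A, B, C where only A-B and B-C were trained
        W = np.array([[1e-6, 1.0, 1e-6],
                      [1.0, 1e-6, 1.0],
                      [1e-6, 1.0, 1e-6]])
        print(np.round(diffuse(W), 2))   # the derived A-C relation now carries noticeable weight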

    Cognitive and MRI trajectories for prediction of Alzheimer’s disease

    Get PDF
    The concept of Mild Cognitive Impairment (MCI) is used to describe the early stages of Alzheimer’s disease (AD), and identifying and treating patients before further decline is an important clinical task. We selected longitudinal data from the ADNI database to investigate how well normal function (HC, n = 134) vs. conversion to MCI (cMCI, n = 134) and stable MCI (sMCI, n = 333) vs. conversion to AD (cAD, n = 333) could be predicted from cognitive tests, and whether the predictions improve when information from magnetic resonance imaging (MRI) examinations is added. Features representing trajectories of change in the selected cognitive and MRI measures were derived from mixed-effects models and used to train ensemble machine learning models to classify the pairs of subgroups based on a subset of the data set. Evaluation in an independent test set showed that the predictions for HC vs. cMCI improved substantially when MRI features were added, with an increase in F1-score from 60% to 77%. The F1-scores for sMCI vs. cAD were 77% without and 78% with the MRI features. The results are in line with findings showing that cognitive changes tend to manifest themselves several years after Alzheimer’s disease is well established in the brain.
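
    The evaluation described above can be sketched as follows: the same type of classifier is trained once on cognitive trajectory features alone and once on cognitive plus MRI features, and both are scored with F1 on an independent test split. The feature matrices below are synthetic placeholders with arbitrary effect sizes, so the printed numbers only illustrate the comparison, not the reported results.

        # Sketch: compare F1 on a held-out set with and without an added feature block.
        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.metrics import f1_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        n = 268                                                    # e.g. 134 HC + 134 cMCI subjects
        y = rng.integers(0, 2, n)
        X_cog = rng.standard_normal((n, 4)) + 0.3 * y[:, None]    # cognitive trajectory features (weak signal)
        X_mri = rng.standard_normal((n, 6)) + 0.8 * y[:, None]    # MRI trajectory features (stronger signal)

        for name, X in [("cognitive only", X_cog),
                        ("cognitive + MRI", np.hstack([X_cog, X_mri]))]:
            X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
            clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
            print(name, "F1:", round(f1_score(y_te, clf.predict(X_te)), 2))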
